Peter Lynn (Editor). Advances in Longitudinal Survey Methodology
In: The public opinion quarterly: POQ, Vol. 86, No. 2, pp. 421-423
ISSN: 1537-5331
In: Journal of survey statistics and methodology: JSSAM, Vol. 10, No. 4, pp. 945-978
ISSN: 2325-0992
Abstract
Breakoff, in which respondents quit a web survey before completing it, is a prevalent problem in data collection. To prevent breakoff bias, it is crucial to keep as many diverse respondents in a web survey as possible. As a first step toward preventing breakoffs, this study aims to understand breakoff and the associated response behavior. We analyze data from an annual online survey using dynamic survival models and ROC analyses. We find that breakoff risks for respondents using mobile devices versus PCs do not differ at the beginning of the questionnaire, but that the risk for mobile device users increases as the survey progresses. Very fast respondents, as well as respondents with changing response times, have a higher risk of quitting the questionnaire than respondents with slower and steady response times. We conclude with a discussion of the implications of these findings for future practice and research in web survey methodology.
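The dynamic survival analysis this abstract describes can be approximated as a discrete-time hazard model: a logistic regression on respondent-by-question records, followed by an ROC analysis of the fitted hazard. The sketch below uses synthetic data, and the predictors (device type, a response-time z-score, and a device-by-progress interaction) are illustrative assumptions, not the study's actual variables.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)

# One row per respondent-question pair: breakoff is modeled as a
# discrete-time survival problem via logistic regression on the hazard.
n = 4000
question = rng.integers(1, 41, size=n)      # position in the questionnaire
mobile = rng.integers(0, 2, size=n)         # 1 = mobile device, 0 = PC
speed_z = rng.normal(size=n)                # response-time z-score (low = fast)

# Simulated hazard: rises for mobile users as the survey progresses
# (device x progress interaction) and for very fast responses.
logit = -4 + 0.04 * question * mobile - 0.5 * speed_z
breakoff = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = np.column_stack([question, mobile, question * mobile, speed_z])
model = LogisticRegression(max_iter=1000).fit(X, breakoff)

# ROC analysis of the fitted hazard model
auc = roc_auc_score(breakoff, model.predict_proba(X)[:, 1])
print(f"AUC = {auc:.3f}")
```

With real data, the rows would come from respondents' actual item-by-item progress, and time-varying covariates (e.g., per-item response time) would enter the same way.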
In: Survey methods: insights from the field, pp. 1-9
ISSN: 2296-4754
This article investigates how different strategies used by interviewers when recording interviewer observations relate to observation accuracy. Before conducting interviews in a refreshment sample of the general population for the German PASS panel study, interviewers were asked to observe one key target variable of the study -- whether a household is at risk of poverty or not -- for all sampled households. In addition, interviewers recorded what strategies they had used to make their observations. For responding households, we assessed the accuracy of the observation by comparing it to an actual survey measure of poverty risk. Separate multilevel regression models attempting to explain the observed interviewer variance in observation accuracy for two types of households (those at risk and not at risk of poverty) using case-level strategies and aggregate interviewer tendencies reveal unique strategies that result in more accurate observations for each type of household. An aggregate fixed-effects model then reveals strategies that prove to be effective regardless of the type of household when accounting for unobserved interviewer heterogeneity.
In: Structural equation modeling: a multidisciplinary journal, Vol. 23, No. 1, pp. 45-53
ISSN: 1532-8007
In: The public opinion quarterly: POQ, Vol. 77, No. 2, pp. 522-548
ISSN: 1537-5331
In: The public opinion quarterly: POQ, Vol. 74, No. 5, pp. 1004-1026
ISSN: 1537-5331
In: The public opinion quarterly: POQ, Vol. 87, No. S1, pp. 575-601
ISSN: 1537-5331
Abstract
Among the numerous explanations offered for recent errors in pre-election polls, selection bias due to non-ignorable partisan nonresponse, in which the probability of responding to a poll is a function of the candidate preference that the poll is attempting to measure (even after conditioning on other relevant covariates used for weighting adjustments), has received relatively little attention in the academic literature. Under this type of selection mechanism, estimates of candidate preferences based on individual or aggregated polls may be subject to significant bias, even after standard weighting adjustments. Until recently, methods for measuring and adjusting for this type of non-ignorable selection bias have been unavailable. Fortunately, recent developments in the methodological literature have provided political researchers with easy-to-use measures of non-ignorable selection bias. In this study, we apply a new measure, developed specifically for estimated proportions, to this challenging problem. We analyze data from 18 pre-election polls: 9 telephone polls conducted in 8 states prior to the 2020 US presidential election, and 9 polls conducted either online or via telephone in Great Britain prior to the 2015 general election. We rigorously evaluate the ability of this new measure to detect and adjust for selection bias in estimates of the proportion of likely voters who will vote for a specific candidate, using official outcomes from each election as benchmarks and alternative data sources for estimating key characteristics of the likely voter populations in each context.
In: Journal of survey statistics and methodology: JSSAM, Vol. 9, No. 1, pp. 141-158
ISSN: 2325-0992
Abstract
Weighted generalized estimating equations (GEEs) are popular for the marginal analysis of longitudinal survey data. This popularity is due to the ability of these estimating equations to provide consistent regression parameter estimates and corresponding standard error estimates as long as the population mean and survey weights are correctly specified. Although the data analyst must incorporate a working correlation structure within the weighted GEEs, this structure need not be correctly specified. However, accurate modeling of this structure has the potential to improve regression parameter estimation (i.e., reduce standard errors) and therefore, the selection of a working correlation structure for use within GEEs has received considerable attention in standard longitudinal data analysis settings. In this article, we describe how correlation selection criteria can be extended for use with weighted GEE in the context of analyzing longitudinal survey data. Importantly, we provide and demonstrate an R function that we have created for such analyses. Furthermore, we discuss correlation selection in the context of using existing software that does not have this explicit capability. The methods are demonstrated via the use of data from a real survey in which we are interested in the mean number of falls that elderly individuals in a specific subpopulation experience over time.
In: Survey methods: insights from the field, pp. 1-10
ISSN: 2296-4754
The importance of correctly accounting for complex sampling features when generating finite population inferences from complex sample survey data sets has now been clearly established in a variety of fields, both statistical and non-statistical. Unfortunately, recent studies of analytic error suggest that many secondary analysts of survey data do not account for these sampling features when analyzing their data, for a variety of possible reasons (e.g., poor documentation, or a data producer not providing the information in a public-use data set). Research in this area has focused exclusively on analyses of household survey data and individual respondents. No research to date has considered how analysts approach data collected in establishment surveys, or whether published articles advancing science based on analyses of establishment behaviors and outcomes correctly account for complex sampling features. This article presents alternative analyses of real data from the 2013 Business Research and Development and Innovation Survey (BRDIS), and shows that a failure to account for the complex design features of the underlying sample can lead to substantial differences in inferences about the BRDIS target population of establishments.
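The core consequence documented here, understated variance when weights and clustering are ignored, can be illustrated with a small design-based calculation. The sketch below is entirely synthetic and generic (it does not use BRDIS variables): it compares a naive simple-random-sample standard error with a Taylor-linearized, ultimate-cluster standard error for a weighted mean.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic clustered sample: 50 PSUs, 20 establishments each, with
# unequal sampling weights and within-cluster correlation in the outcome.
n_clusters, m = 50, 20
cluster = np.repeat(np.arange(n_clusters), m)
w = rng.uniform(1, 10, n_clusters * m)                       # sampling weights
cluster_effect = np.repeat(rng.normal(0, 1.0, n_clusters), m)
y = 5 + cluster_effect + rng.normal(0, 1.0, n_clusters * m)

# Weighted (Hajek) mean: the point estimate for the population mean.
ybar = np.sum(w * y) / np.sum(w)

# Naive SE: treats the data as an unweighted simple random sample.
se_srs = y.std(ddof=1) / np.sqrt(len(y))

# Design-based SE: Taylor linearization with ultimate-cluster totals.
u = w * (y - ybar) / np.sum(w)                 # linearized scores
cluster_totals = np.bincount(cluster, weights=u)
se_design = np.sqrt(n_clusters / (n_clusters - 1)
                    * np.sum(cluster_totals ** 2))

print(f"weighted mean   = {ybar:.3f}")
print(f"naive SRS SE    = {se_srs:.4f}")
print(f"design-based SE = {se_design:.4f}")   # larger under clustering
```

The ratio of the two variances is the design effect; ignoring it yields confidence intervals that are too narrow and test statistics that are too liberal, which is the analytic error the article measures.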
In: Journal of survey statistics and methodology: JSSAM, Article smw024
ISSN: 2325-0992
In: Public opinion quarterly: journal of the American Association for Public Opinion Research, Vol. 78, No. 4, pp. 795-831
ISSN: 0033-362X
In: Chapman & Hall/CRC statistics in the social and behavioral sciences series
In: A Chapman & Hall Book
In: Journal of survey statistics and methodology: JSSAM, Vol. 11, No. 4, pp. 784-805
ISSN: 2325-0992
Abstract
Growing reluctance of households to participate in surveys has led to a variety of methodological efforts to combat this phenomenon. Several organizations employ case prioritization in a responsive survey design framework, dedicating increased effort to specific subgroups of sampled cases during certain data collection phases. For example, some surveys may prioritize subgroups defined by age and/or race/ethnicity if balanced response rates across these subgroups are important for minimizing nonresponse bias. Unfortunately, no methodological studies to date have identified optimal approaches for applying this increased effort to prioritized cases. This study experimentally examined three alternative methods for case prioritization in the National Survey of Family Growth: simply flagging the cases to receive increased effort in the sample management system (a "standard" method), developing tailored approaches to working the prioritized cases, and no prioritization (a "control" method). In the "tailored" method, which was designed to provide the interviewers with more guidance than simple "flagging," interviewers worked with their supervisors to develop tailored strategies for how to best work each case within the prioritized subgroup. We find that both prioritization methods improved response rates and led to significant reductions in calling effort per completed case, with the "flagging" approach working particularly well. Given the additional costs associated with the "tailored" method, the results of our experiment provide support for a "hybrid" case prioritization approach that combines optimal features of these two methods.